
    Bayesian networks as a decision support tool for rural water supply and sanitation sector

    Despite the efforts made towards the Millennium Development Goals targets during the last decade, millions of people across the world still lack improved access to water supply or basic sanitation. The increasing complexity of the context in which these services are delivered is not properly captured by the conventional approaches used to assess water, sanitation and hygiene (WaSH) interventions. Instead, a holistic framework is required that integrates the wide range of aspects influencing the sustainable and equitable provision of safe water and sanitation, especially to those in vulnerable situations. In this context, the WaSH Poverty Index (WaSH-PI) was adopted as a multi-dimensional policy tool that tackles the links between access to basic services and the socio-economic drivers of poverty. Nevertheless, this approach does not fully capture the growing interdependencies at play. For this reason, appropriate Decision Support Systems (DSS) are required i) to report on the results achieved in past and current interventions, and ii) to determine the expected impacts of future initiatives, particularly taking into account the investments envisaged to reach the targets set by the Sustainable Development Goals (SDGs). This would provide decision-makers with adequate information to define strategies and actions that are efficient, effective, and sustainable. This master thesis explores the use of object-oriented Bayesian networks (ooBn) as a powerful instrument to support project planning and monitoring, as well as targeting and prioritization. Based on the WaSH-PI theoretical framework, a simple ooBn model has been developed and applied to reflect the main issues that determine access to safe water, sanitation and hygiene. A case study is presented in Kenya, where in 2008 the Government launched a national program aimed at increasing access to improved water, sanitation and hygiene in 22 of the 47 existing districts. The main impacts resulting from this initiative are assessed and compared against the initial situation. This research concludes that the proposed approach is able to accommodate conditions at different scales while reflecting the complexities of WaSH-related issues. Additionally, this DSS represents an effective management tool to help decision-makers formulate informed choices between alternative actions.
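    To make the decision-support idea concrete, a minimal sketch of a discrete Bayesian network is given below. It assumes the pgmpy library is available and uses invented node names and probabilities ("Infrastructure", "Governance", "WaterAccess") purely for illustration; it is not the thesis's actual ooBn model or data.

```python
# Minimal Bayesian-network sketch for WaSH-style decision support.
# Assumes pgmpy is installed; node names and probabilities are invented placeholders.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Infrastructure", "WaterAccess"),
                         ("Governance", "WaterAccess")])

cpd_inf = TabularCPD("Infrastructure", 2, [[0.6], [0.4]])  # 0 = poor, 1 = good
cpd_gov = TabularCPD("Governance", 2, [[0.5], [0.5]])
cpd_acc = TabularCPD("WaterAccess", 2,
                     # P(access | infrastructure, governance), columns over parent states
                     [[0.9, 0.6, 0.7, 0.2],
                      [0.1, 0.4, 0.3, 0.8]],
                     evidence=["Infrastructure", "Governance"],
                     evidence_card=[2, 2])
model.add_cpds(cpd_inf, cpd_gov, cpd_acc)

# Query the expected effect of a hypothetical intervention that improves governance.
inference = VariableElimination(model)
print(inference.query(["WaterAccess"], evidence={"Governance": 1}))
```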

    Does median filtering truly preserve edges better than linear filtering?

    Image processing researchers commonly assert that "median filtering is better than linear filtering for removing noise in the presence of edges." Using a straightforward large-n decision-theory framework, this folk-theorem is seen to be false in general. We show that median filtering and linear filtering have similar asymptotic worst-case mean-squared error (MSE) when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of constant per-pixel noise level in a digital signal. To see dramatic benefits of median smoothing in an asymptotic setting, the per-pixel noise level should tend to zero (i.e., the SNR should grow very large). We show that a two-stage median filter using two very different window widths can dramatically outperform traditional linear and median filtering in settings where the underlying object has edges. In this two-stage procedure, the first pass, at a fine scale, aims at increasing the SNR. The second pass, at a coarser scale, correctly exploits the nonlinearity of the median. Image processing methods based on nonlinear partial differential equations (PDEs) are often said to improve on linear filtering in the presence of edges. Such methods seem difficult to analyze rigorously in a decision-theoretic framework. A popular example is mean curvature motion (MCM), which is formally a kind of iterated median filtering. Our results on iterated median filtering suggest that some PDE-based methods are candidates to rigorously outperform linear filtering in an asymptotic framework. Comment: Published at http://dx.doi.org/10.1214/08-AOS604 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
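    A minimal numerical sketch of the two-stage idea is shown below. The window widths (5 then 31), the noise level, and the comparison with a simple moving average are illustrative assumptions rather than parameters from the paper; only the ordering of the passes (fine scale first, coarse scale second) follows the abstract.

```python
# Two-stage median filtering sketch on a 1-D signal with an edge.
# Window sizes and noise level are illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Piecewise-constant "object" with a sharp edge, plus low-level Gaussian noise (high SNR).
x = np.where(np.arange(1024) < 512, 0.0, 1.0)
y = x + 0.05 * rng.standard_normal(x.size)

# Stage 1: fine-scale median pass, mainly to raise the effective SNR.
stage1 = median_filter(y, size=5)

# Stage 2: coarser median pass, exploiting the nonlinearity of the median near the edge.
stage2 = median_filter(stage1, size=31)

# Baseline: a single linear (moving-average) smoother of comparable width.
linear = np.convolve(y, np.ones(31) / 31, mode="same")

mse = lambda est: np.mean((est - x) ** 2)
print(f"two-stage median MSE: {mse(stage2):.2e}, linear MSE: {mse(linear):.2e}")
```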

    On the limits of engine analysis for cheating detection in chess

    The integrity of online games has important economic consequences for both the gaming industry and players of all levels, from professionals to amateurs. Where there is a high likelihood of cheating, there is a loss of trust, and players will be reluctant to participate, particularly if this is likely to cost them money. Chess is a game that has been established online for around 25 years and is played over the Internet commercially. In that environment, where players are not physically present "over the board" (OTB), chess is one of the games most easily exploited by those who wish to cheat, because of the widespread availability of very strong chess-playing programs. Allegations of cheating, even in OTB games, have increased significantly in recent years and have even led to recent changes in the laws of the game that potentially impinge upon players' privacy. In this work, we examine some of the difficulties inherent in identifying the covert use of chess-playing programs purely from an analysis of the moves of a game. Our approach is to examine in depth a large collection of games where there is confidence that cheating has not taken place, and to analyse those that could easily be misclassified. We conclude that there is a serious risk of finding numerous "false positives" and that, in general, it is unsafe to use just the moves of a single game as prima facie evidence of cheating. We also demonstrate that it is impossible to compute definitive values of the figures currently employed to measure similarity to a chess engine for a particular game, as these values inevitably vary at different search depths and, even under identical conditions, when multi-threaded evaluation is used.
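    To illustrate the kind of similarity figure at issue, here is a small, purely hypothetical sketch of a move-matching rate computed against engine top choices obtained at two different depths; the moves and depths are invented, and the only point is that the statistic itself changes with the analysis settings.

```python
# Hypothetical move-match-rate sketch; moves and depths are invented, not real analysis output.
from typing import Dict, List

def move_match_rate(player_moves: List[str], engine_choices: List[str]) -> float:
    """Fraction of positions where the played move equals the engine's top choice."""
    matched = sum(p == e for p, e in zip(player_moves, engine_choices))
    return matched / len(player_moves)

played = ["e4", "Nf3", "Bb5", "O-O", "Re1", "d4"]
# The same game analysed at two search depths can yield different "top" moves,
# so the resulting similarity figure is itself depth-dependent.
engine_by_depth: Dict[int, List[str]] = {
    12: ["e4", "Nf3", "Bc4", "O-O", "d4", "d4"],
    24: ["e4", "Nf3", "Bb5", "O-O", "Re1", "c3"],
}

for depth, choices in engine_by_depth.items():
    print(f"depth {depth}: match rate = {move_match_rate(played, choices):.0%}")
```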

    Scaling cosmology with variable dark-energy equation of state

    Interactions between dark matter and dark energy which result in a power-law behavior (with respect to the cosmic scale factor) of the ratio between the energy densities of the dark components (thus generalizing the LCDM model) have been considered as an attempt to alleviate the cosmic coincidence problem phenomenologically. We generalize this approach by allowing for a variable equation of state for the dark energy within the CPL parametrization. Based on analytic solutions for the Hubble rate and using the Constitution and Union2 SNIa sets, we present a statistical analysis and classify different interacting and non-interacting models according to the Akaike (AIC) and Bayesian (BIC) information criteria. We do not find noticeable evidence for an alleviation of the coincidence problem with the mentioned type of interaction. Comment: 21 pages, 11 figures, 11 tables, discussion improved.
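    As a rough sketch of the model-comparison step, the snippet below evaluates AIC and BIC from hypothetical best-fit chi-square values for nested dark-energy models; only the CPL form w(a) = w0 + wa(1 - a) and the standard AIC/BIC definitions are taken from the literature, while the chi-square numbers are invented placeholders, not the paper's results.

```python
# AIC/BIC comparison sketch; chi^2 values below are invented placeholders.
import numpy as np

def cpl_w(a, w0, wa):
    """CPL equation of state w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def aic(chi2_min, k):
    """Akaike information criterion with chi^2 playing the role of -2 ln L_max."""
    return chi2_min + 2.0 * k

def bic(chi2_min, k, n):
    """Bayesian information criterion for n data points and k free parameters."""
    return chi2_min + k * np.log(n)

n_sn = 557  # size of the Union2 SNIa compilation
fits = {"LCDM (k=1)": (562.2, 1), "wCDM (k=2)": (561.8, 2), "CPL (k=3)": (561.5, 3)}

for name, (chi2_min, k) in fits.items():
    print(f"{name}: AIC = {aic(chi2_min, k):.1f}, BIC = {bic(chi2_min, k, n_sn):.1f}")
```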

    On the Universality of Inner Black Hole Mechanics and Higher Curvature Gravity

    Black holes are famous for their universal behavior. New thermodynamic relations have recently been found for the product of gravitational entropies over all the horizons of a given stationary black hole. This product has been found to be independent of the mass for all such solutions of Einstein-Maxwell theory in d=4,5. We study the universality of this mass independence by introducing a number of possible higher-curvature corrections to the gravitational action. We consider finite-temperature black holes with both asymptotically flat and (A)dS boundary conditions. Although we find examples for which mass independence of the horizon entropy product continues to hold, we show that the universality of this property fails in general. We also derive further thermodynamic properties of inner horizons, such as the first law and the Smarr relation, in the higher-curvature theories under consideration, as well as a set of relations between thermodynamic potentials on the inner and outer horizons that follow from the horizon entropy product, whether or not it is mass independent. Comment: 26 pages.
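    For context, a quick numerical check of the mass independence in the simplest setting (a four-dimensional Kerr-Newman black hole of Einstein-Maxwell theory, not the higher-curvature theories studied in the paper) is sketched below: the product of inner and outer horizon entropies, S+ S- = 4 pi^2 (J^2 + Q^4/4), does not change as the mass is varied.

```python
# Kerr-Newman entropy-product check in geometric units (G = c = hbar = k_B = 1).
import numpy as np

def horizon_entropies(M, J, Q):
    """Inner and outer horizon entropies of a Kerr-Newman black hole."""
    a = J / M
    disc = M**2 - a**2 - Q**2
    if disc < 0:
        raise ValueError("parameters describe a naked singularity, not a black hole")
    r_plus, r_minus = M + np.sqrt(disc), M - np.sqrt(disc)
    # S = A / 4 with horizon area A = 4 pi (r^2 + a^2)
    return np.pi * (r_plus**2 + a**2), np.pi * (r_minus**2 + a**2)

J, Q = 0.3, 0.2
expected = 4.0 * np.pi**2 * (J**2 + Q**4 / 4.0)
for M in (1.0, 2.0, 5.0):
    sp, sm = horizon_entropies(M, J, Q)
    print(f"M={M}: S+ S- = {sp * sm:.6f}  (4 pi^2 (J^2 + Q^4/4) = {expected:.6f})")
```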

    Optimization of the ionization time of an atom with tailored laser pulses: a theoretical study

    How fast can a laser pulse ionize an atom? We address this question by considering pulses that carry a fixed time-integrated energy per unit area, and finding those that achieve the double requirement of maximizing the ionization they induce while having the shortest duration. We formulate this double-objective quantum optimal control problem using the Pareto approach to multi-objective optimization and the differential evolution genetic algorithm. The goal is to find out how much a precise time-profiling of ultra-fast, large-bandwidth pulses may speed up the ionization process with respect to simple-shape pulses. We work on a simple one-dimensional model of hydrogen-like atoms (the Pöschl-Teller potential), which allows us to tune the number of bound states that play a role in the ionization dynamics. We show how the detailed shape of the pulse accelerates the ionization process, and how the presence or absence of bound states influences the speed of the process.
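    As a small illustration of the Pareto step described above, the sketch below filters a set of candidate pulse shapes down to the non-dominated ones, treating pulse duration and (1 - ionization yield) as the two objectives to be minimized; the candidate values are random placeholders rather than results of the paper's differential-evolution search.

```python
# Pareto non-dominance filter sketch; candidate objective values are random placeholders.
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n, 2) array where smaller is better in both columns."""
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in both objectives and strictly better in one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

rng = np.random.default_rng(1)
# Columns: (pulse duration, 1 - ionization yield), e.g. produced by a search over pulse parameters.
candidates = rng.uniform(0.0, 1.0, size=(50, 2))
front = pareto_front(candidates)
print(f"{len(front)} non-dominated pulse shapes out of {len(candidates)}")
```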